Sibyl: Explaining Machine Learning Models for High-Stakes Decision Making

#artificialintelligence

As machine learning is applied to an increasingly large number of domains, the need for an effective way to explain its predictions grows apace. In the domain of child welfare screening, machine learning offers a promising method of consolidating the large amount of data that screeners must review, potentially improving outcomes for children reported to child welfare departments. Interviews and case studies suggest that adding an explanation alongside the model prediction may result in better outcomes, but it is not obvious what kind of explanation would be most useful in this context. Through a series of interviews and user studies, we developed Sibyl, a machine learning explanation dashboard specifically designed to aid child welfare screeners' decision making. When testing Sibyl, we evaluated four different explanation types and, based on this evaluation, concluded that a local feature contribution approach was most useful to screeners.
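As a minimal sketch of what a local feature contribution explanation computes, consider a linear model, where each feature's contribution to a single prediction can be read off directly; model-agnostic tools such as SHAP generalize the same idea to black-box models. The feature names, model, and synthetic data below are illustrative assumptions, not details from the Sibyl study.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # four toy screening features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def local_contributions(model, x, baseline):
    # Per-feature contribution to the log-odds of one case's prediction,
    # measured relative to a baseline (here, the training-set mean).
    return model.coef_[0] * (x - baseline)

case = X[0]
for name, c in zip(["feat_a", "feat_b", "feat_c", "feat_d"],
                   local_contributions(model, case, X.mean(axis=0))):
    print(f"{name}: {c:+.3f}")

A screener-facing dashboard like Sibyl would present these signed contributions visually, showing which features pushed an individual case's risk score up or down.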


The False Comfort of Human Oversight as an Antidote to A.I. Harm

Slate

In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems. A.I. is celebrated for its superior accuracy, efficiency, and objectivity in comparison to humans.


Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

#artificialintelligence

Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
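To make the contrast concrete, here is a minimal sketch of the inherently interpretable alternative the Perspective advocates: a depth-limited decision tree whose complete decision logic can be printed and audited, so no post-hoc explanation of a black box is needed. The dataset and depth limit are illustrative assumptions, not examples drawn from the paper.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# A depth-limited tree trades some accuracy for full transparency:
# every prediction follows a short, human-readable rule path.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the entire model as if-then rules for inspection.
print(export_text(tree, feature_names=list(data.feature_names)))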